Tracking musical patterns using Joint Accent Structure
Abstract
Joint Accent Structure (JAS) is a construct that uses temporal relationships between different accents in a melodic pattern as indices of its complexity. Concordant patterns are ones in which the period of melodic accents forms a simple ratio with the period of temporal accents (e.g., 1:1, 1:2), whereas discordant patterns have melodic and temporal accent periods in a more complex ratio (e.g., 3:2). Two experiments examined attentional tracking of musical patterns having a "concordant" or "discordant" JAS; participants were instructed to selectively attend to accents and to synchronize finger taps with them. Results indicated that tapping was more variable with discordant than with concordant JAS patterns, both with respect to the produced inter-accent time periods and with respect to the phase of taps relative to accent onsets. These findings are interpreted in terms of real-time attending and its control by event time structure.

In listening to music one often experiences a sense of temporal anticipation that is crucial to the impact of a piece. We contend that part of this experience derives from the establishment of regular accent relationships in that piece. An accent may be defined as any element in an auditory sequence (e.g., a tone) that stands out from others, usually because it disrupts the context established by surrounding elements (e.g., it is longer in time or higher in frequency). Accents arise from changes along different specifiable dimensions (such as time or frequency); we refer to an accent defined by a change along one dimension as a distinct accent "type." Instances of various accent types may mark out distinct time periods within a musical event, lending it a coherent temporal structure. For example, a coherent melodic line may designate regularly occurring melodic-type accents by virtue of changes in contour, pitch distance, or implied harmony, as this information arises from distinct movements of pitch in time. Similarly, a melody's rhythm may designate a series of temporal-type accents through lengthened tones and/or rests.

Much research has been directed toward understanding the effects of single accent types (e.g., melodic only) on the perception of musical events. In contrast, we know little about the temporal coherence of more intricate patterns that result from combinations of different accent types. Indeed, the relationships between different accents in time may contribute to the temporal coherence of a pattern. When different accent types combine in a musical event, they outline a time structure that has been termed that event's "Joint Accent Structure" (Jones, 1987, 1993). Different Joint Accent Structures (JASs) result when the positioning of accents forms different higher-order time relationships. The present research examines the role of different JASs in real-time attending to simple musical events. We suggest that listeners who are sensitive to accent relationships respond directly to time intervals between various accents within a JAS, and may use higher-order invariant aspects of this time structure to guide attending in a preparatory fashion. In this way a JAS enables listeners to monitor unfolding events in time.

Consider, for instance, the first few bars of Beethoven's Fifth Symphony shown in Figure 1. The fourth note of the piece receives an accent from two different sources: an increase in duration (a temporal accent) and a change in pitch (a melodic accent).
For this excerpt, the Joint Accent Structure created by the co-occurrence of a temporal and a melodic accent provides a clear time structure, one characterized by a strongly marked and recurrent time period. This invariant periodicity, in turn, creates a strong expectation for future accents, which Beethoven fulfills in the next several measures. The preceding example outlines a relatively simple JAS. Here, two different accent types (melodic, temporal) coincide to outline a common recurrent time period. In more complex JASs, different accents specify different, possibly conflicting, time relationships. When such time relationships conflict, the accent types involved may neither coincide in time nor outline the same recurrent time period.

Canadian Journal of Experimental Psychology, 1997, 51(4), 271-290.
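To make the ratio contrast in the abstract concrete, here is a minimal Python sketch (not from the paper; the function name, the fixed integer periods, and the classification rule are illustrative assumptions) that treats melodic and temporal accents as recurring with fixed periods and summarizes their accent period ratio. In a concordant case such as 1:2 the two accent types realign after a short interval, whereas in a discordant case such as 3:2 they realign only after the longer least common multiple of the periods, which is one way to express the conflicting time relationships described above.

    # Minimal sketch, assuming accents of each type recur with a fixed integer
    # period (in arbitrary beat units); names and the 1:n rule are illustrative.
    from math import gcd

    def jas_profile(melodic_period, temporal_period):
        """Summarize a hypothetical Joint Accent Structure by its accent period ratio."""
        g = gcd(melodic_period, temporal_period)
        ratio = (melodic_period // g, temporal_period // g)
        # Both accent types realign every least common multiple of the two periods.
        realignment = melodic_period * temporal_period // g
        # Treat 1:n ratios (1:1, 1:2, ...) as concordant, others (e.g., 3:2) as discordant.
        kind = "concordant" if 1 in ratio else "discordant"
        return {"ratio": ratio, "realignment_period": realignment, "kind": kind}

    print(jas_profile(2, 4))  # ratio (1, 2), realigns every 4 beats -> concordant
    print(jas_profile(3, 2))  # ratio (3, 2), realigns every 6 beats -> discordant

On this reading, a discordant JAS offers fewer coinciding accents per unit time for a listener to anchor to, which is consistent with the more variable tapping reported in the abstract.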
Similar resources
Mind the Peak: When Museum is Temporarily Understood as Musical in Australian English
Intonation languages signal pragmatic functions (e.g. information structure) by means of different pitch accent types. Acoustically, pitch accent types differ in the alignment of pitch peaks (and valleys) in regard to stressed syllables, which makes the position of pitch peaks an unreliable cue to lexical stress (even though pitch peaks and lexical stress often coincide in intonation languages)...
From MIDI to Traditional Musical Notation
In this paper a system that is designed to extract the musical score from a MIDI performance is described. The proposed system comprises a number of modules that perform the following tasks: identification of elementary musical objects, calculation of accent (salience) of musical events, beat induction, beat tracking, onset quantisation, streaming, duration quantisation and pitch spelling. T...
Mental representations for musical meter.
Investigations of the psychological representation for musical meter provided evidence for an internalized hierarchy from 3 sources: frequency distributions in musical compositions, goodness-of-fit judgments of temporal patterns in metrical contexts, and memory confusions in discrimination judgments. The frequency with which musical events occurred in different temporal locations differentiates...
J Physiol Anthropol Appl Human Sci 24(1): 143–149, 2005
Experimental studies on the relationship between quasi-musical patterns and visual movement have largely focused on either referential, associative aspects or syntactical, accent-oriented alignments. Both of these are very important; however, between the referential and the areferential lies a domain where visual pattern perceptually connects to musical pattern: this is iconicity. The temporal synt...
Eye tracking for the online evaluation of prosody in speech synthesis: not so fast!
This paper presents an eye-tracking experiment comparing the processing of different accent patterns in unit selection synthesis and human speech. The synthetic speech results failed to replicate the facilitative effect of contextually appropriate accent patterns found with human speech, while producing a more robust intonational garden-path effect with contextually inappropriate patterns, both...